- STANDARD PROCEDURAL DATABASES, by Eric Haines, 3D/Eye, Inc.
-
- [Created while under contract to Hewlett-Packard FSD and HP Laboratories]
- Version 2.4, as of 5/1/88
- address: 3D/Eye, Inc., 410 East Upland Road, Ithaca, NY 14850
- e-mail: hpfcla!hpfcrs!eye!erich@hplabs.HP.COM
-
- History
- -------
- This software package by Eric Haines is not copyrighted and can be used freely.
- Versions 1.0 to 1.5 released February to June, 1987 for testing.
- Version 2.0 released July, 1987.
- Version 2.1 released August, 1987 - corrected info on speed of the HP 320,
- other minor changes to README.
- Version 2.2 released November, 1987 - shortened file names to <=12 characters,
- procedure names to <= 32 characters, and ensured that all lines are <= 80
- characters (including return character).
- Version 2.3 released March, 1988 - corrected gears.c to avoid interpenetration,
- corrected and added further instructions and global statistics for ray
- tracing to README.
- Version 2.4 released May, 1988 - fixed hashing function for mountain.c.
-
- {This file uses tab characters worth 8 spaces}
-
-
- Introduction
- ------------
-
- This software is meant to act as a set of basic test images for ray tracing
- and hidden surface algorithms. The programs generate databases of objects
- which are fairly familiar and "standard" to the graphics community, such as a
- fractal mountain, a tree, a recursively built tetrahedral structure, etc. I
- created them originally for my own testing of a ray-tracer. My hope is that
- these will be used by researchers to test algorithms. In this way, research on
- algorithmic improvements can be compared on a more standard set of measures.
- At present, one researcher ray-traces a car, another a tree, and the question
- arises "How many cars to the tree?" With these databases we may be comparing
- oranges and apples ("how many hypercubes to a timeshared VAX?"), but it sure
- beats comparing oranges and orangutans.
-
- Another use for these databases is to find how many vectors or polygons per
- second a particular machine can actually generate. Talking with people
- at SIGGRAPH '86, I found that various vendors would say "well, they claim
- a rate of 5000 polygons/second, but their polygons are not our polygons".
- With these tests, their polygons are our polygons. I admit a bias towards the
- structure of the HP-SRX ("Renaissance") display system and its display software
- "Starbase" in the way I structured the testing conditions (laid out later),
- but this comes fairly close to preliminary PHIGS+ standards. For example,
- these databases are not kind to machines using the painter's algorithm, but I
- think this algorithm is inappropriate for good, general 3D modeling hardware.
- These databases do not use polygons where an intensity or color is given for
- each vertex, though such polygons are a feature of some machines (including
- the SRX). They do include polygonal patches, where a normal is given for each
- vertex, as this feature seems fairly common. Basically, if display hardware
- cannot provide 8 light sources, use of normals at vertices, specular shading,
- z-buffering, and perspective transformations, then I consider this machine
- somewhat ineffective for use in a 3D modeling environment (which is mostly all
- I care about--my personal bias). If you do not, please tell me why!
-
- The images for these databases and other information about them can be
- found in "A Proposal for Standard Graphics Environments," IEEE Computer
- Graphics & Applications, November 1987, pages 3-5. A corrected image of the
- tree appears in IEEE CG&A, January 1988, page 18.
-
- At present, the SPD package is available for the IBM PC on 360K 5.25"
- floppy for $5 from: MicroDoc, c/o F.W. Pospeschil, 3108 Jackson Street,
- Bellevue, Nebraska 68005.
-
-
- File Structure
- --------------
-
- Six different procedural database generators are included. These were
- designed to span a fair range of primitives, modeling structures, lighting
- and surface conditions, background conditions, and other factors. A complexity
- factor is provided within each program to control the size of the database
- generated.
-
- This software package contains the following files:
-
- README - what you are now reading
- makefile - used to make the programs (in HP-UX Unix)
- def.h - some useful "C" definitions
- lib.h - globals and library routine declarations
- lib.c - library of routines
- balls.c - fractal ball object generator
- gears.c - 3D array of interlocking gears
- mountain.c - fractal mountain and 4 glass ball generator
- rings.c - pyramid of dodecahedral rings generator
- tetra.c - recursive tetrahedra generator
- tree.c - tree generator
-
- The compiled and linked programs will output a database in ASCII to stdout
- containing viewing parameters, lighting conditions, material properties, and
- geometric information. The data format is called the 'neutral file format'
- (or NFF) and is outlined in lib.c. This format is meant to be minimal and
- (hopefully) easy to attach to a user-written filter program which will convert
- the output into a file format of your choosing.
-
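- To give a rough flavor of NFF (the comment block in lib.c is the authoritative
- description), a file is a sequence of one-letter records: a "v" viewing block,
- "b" background color, "l" lights, "f" fill and material descriptions, and
- geometry ("s" spheres, "c" cones and cylinders, "p" polygons, "pp" polygonal
- patches). A small fragment, with purely illustrative values, might look like:
-
-         l 4.0 3.0 2.0
-         f 1.0 0.5 0.5 1.0 0.0 0.0 0.0 1.0
-         s 1.0 1.0 1.0 0.5
-         p 3
-         0.0 0.0 0.0
-         1.0 0.0 0.0
-         0.0 1.0 0.0
-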
- Either of two sets of primitives can be selected for output. If
- OUTPUT_FORMAT is defined as OUTPUT_CURVES, the primitives are spheres, cones,
- cylinders, and polygons. If OUTPUT_FORMAT is set to OUTPUT_PATCHES, the
- primitives are simply polygonal patches and polygons (i.e. all other primitives
- are polygonalized). In this case OUTPUT_RESOLUTION is used to set the amount
- of polygonalization of non-polygonal primitives. In general, OUTPUT_CURVES is
- used for ray-trace timing tests, and OUTPUT_PATCHES for hidden surface and
- other polygon-based algorithm timings.
-
- SIZE_FACTOR is used to control the overall size of the database. Default
- values have been chosen so that the largest database with fewer than 10,000
- primitives is output. One purpose of SIZE_FACTOR is to avoid limiting the uses
- of these databases. Depending on the research being done and the computing
- facilities available, a larger or smaller number of primitives may be desired.
- SIZE_FACTOR can also be used to show how an algorithm's time changes as the
- complexity increases.
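-
- The selection is made with compile-time definitions; the actual definitions
- and default values live in the package source, so the following is only an
- illustrative sketch of how a hidden surface timing run might be configured:
-
-         /* illustrative sketch only -- see the generator sources for the
-          * actual definitions and default values */
-         #define OUTPUT_CURVES     0    /* spheres, cones, cylinders, polygons */
-         #define OUTPUT_PATCHES    1    /* everything output as polygons */
-
-         #define OUTPUT_FORMAT     OUTPUT_PATCHES  /* hidden surface timings */
-         #define OUTPUT_RESOLUTION 4               /* default tessellation   */
-         #define SIZE_FACTOR       2               /* value shown is arbitrary */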
-
- Other parameters (for example, branching angles for "tree.c" and the
- fractal dimension in "mountain.c") are included for your own enjoyment, and so
- normally should not be changed if the database is used for timing tests.
-
- Note that light intensities, ambient components, etc. are not specified.
- These may be set however you prefer. The thrust of these databases is the
- testing of rendering speeds, and so the actual color should not affect these
- calculations. An ambient component should be used for all objects. A simple
- formula for an intensity for each light and the ambient component is the
- following: sqrt(# lights) / (# lights * 2).
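-
- For instance, a minimal "C" rendering of this formula (the function name is
- illustrative and not part of the package) is:
-
-         #include <math.h>
-
-         /* suggested intensity for each light (and the ambient term) in a
-          * scene with n_lights light sources: sqrt(n) / (2n) */
-         double light_intensity(int n_lights)
-         {
-             return sqrt((double)n_lights) / (n_lights * 2.0);
-         }
-
- With one light this gives 0.5; with the seven lights of the tree database it
- gives about 0.19.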
-
-
- Database Analysis
- -----------------
-
- The databases "mountain" and "tetra" consist of primitives of about the
- same size in a localized cluster, "balls" and "tree" are more varied clusters,
- and "rings" and "gears" are somewhat space-filling for the eye rays. Some
- other facts about these databases:
-
- balls gears mountain rings tetra tree
- ----- ----- -------- ----- ----- ----
- primitives SP P PS YSP P OSP
- (where S=sphere, P=polygon, Y=cylinder, O=cone, in # of objects order)
- total prim. 7382 9345 8196 8401 4096 8191
- polys 1417K 9345 8960 874K 4096 852K
- rend. polys 707K 4448 5275 435K 2496 426K
- ave. poly 0.26 173.6 53.2 2.95 35.4 0.09
- vectors 4251K 55300 26880 2688K 12288 2621K
- rend. vectors 4251K 54692 26074 2688K 12288 2621K
- ave. vector 0.59 7.6 12.5 2.57 11.1 0.26
- ave. # edges 3.00 5.92 3.00 3.08 3.00 3.08
-
- lights 3 5 1 3 1 7
- background 0% 7% 34% 0% 81% 35%
- specular yes yes yes yes no no
- transmitter no yes yes no no no
-
- eye hit rays 263169 245532 173422 263169 49806 170134
- reflect rays 175616 305561 355355 316621 0 0
- refract rays 0 208153 355355 0 0 0
- shadow rays 954544 2126105 362657 1087366 46150 1099748
-
- "total prim." is the total number of ray-tracing primitives (polygons,
- spheres, cylinders and cones) in the scene. The number of polygons and vectors
- generated is a function of the OUTPUT_RESOLUTION. The default value for this
- parameter is 4 for all databases.
-
- "polys" is the total number of polygons and polygonal patches generated
- when using OUTPUT_PATCHES. "rend. polys" is the number of polygons actually
- sent to the z-buffer (i.e. not culled and not fully clipped). "ave. poly" is
- the average rendered polygon size in pixels for a 512 x 512 resolution. Note
- that this size is the average of the number of pixels put on the screen by all
- unculled, not fully clipped polygons. Culled polygons are not counted, nor are
- pieces of polygons which are off-screen (clipped). For this statistic, all
- transparent objects are considered opaque. The area was calculated directly
- from the exact transformed vertices, not from the screen. Note that in
- practice there usually are more pixels per polygon actually rendered, mostly
- due to the polygon edges being fully rendered. It is because of this machine
- dependency that the purely geometric area is given.
-
- "vectors" is the total number of vectors generated. "rend. vector" is the
- number of vectors which were not fully clipped off the screen (note that no
- culling takes place in vector testing). "ave. vector" is similar to "ave.
- poly", being the average length of all rendered vectors in pixels. Note that
- culling is not performed for vectors. Again, the actual number of pixels
- rendered is machine dependent and could be different than this average vector
- length.
-
- "lights" is simply the number of lights in a scene. "background" is the
- percentage of background color (empty space) seen directly by the eye for the
- given view. It is calculated by "1 - ( eye hit rays/(513*513) )", since
- 513 x 513 rays are generated from the eye. "specular" tells if there are
- reflective objects in the scene, and "transmitter" if there are transparent
- objects.
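- For example, the tetra database has 49806 eye hit rays, so its entry is
- 1 - ( 49806/263169 ), or about 81%, as listed in the table above.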
-
- "eye hit rays" is the number of rays from the eye which actually hit an
- object (i.e. not the background). "reflect rays" is the total number of rays
- generated by reflection off of reflecting and transmitting surfaces. "refract
- rays" is the number of rays generated by transmitting surfaces. "shadow rays"
- is the sum total of rays shot towards the lights. Note that if a surface is
- facing away from a light, or the background is hit, a light ray is not formed.
- The numbers produced by a given ray tracer can vary noticeably from those
- listed here, but should all be within about 10%.
-
- "K" means exactly 1000 (not 1024), with number rounded to the nearest K.
-
-
- Testing Procedures
- ------------------
-
- Below are listed the requirements for testing various algorithms. These test
- conditions should be realizable by most systems, and are meant to represent a
- common mode of operation for each algorithm. Special features which the
- hardware supports (or standard features which it lacks) should be noted with
- the statistics.
-
-
- Hardware Vector Testing:
-
- 1) Two separate tests should be performed. One test should be done at
- a resolution of 512 x 512. The second test should be done at the
- maximum square resolution representable on the screen (e.g. the
- HP SRX's resolution is 1280 x 1024, so run the tests at 1024 x 1024).
- The first test is done so that the same sized polygons are used in
- tests. The second test is done to test the display hardware using a
- more commonly used image resolution.
-
- 2) At least 24 bit planes should be used for color rendering, if
- available. If not, this should be noted and the best mapping mode
- available should be used.
-
- 3) Vectors should be flat-shaded, with no hidden line removal performed
- and no depth-cueing (but note these features if available).
-
- 4) The largest unit of display is the polygon. This means that no
- queueing of edge lists or optimization of move/draw commands can be
- performed by the user. All polygons must be rendered. This will mean
- that many edges will be drawn twice (since they are shared by
- polygons).
-
- 5) Timings will consist of only the time spent on the polygon draw calls,
- including the overhead of calling the display routine itself. This
- goal can most easily be realized by timing two versions of the testing
- program. One version reads the database and displays the results
- normally. The second version has all the system supplied vector
- drawing commands removed, and so will output no vectors. We can
- determine the time spent actually displaying vectors by taking the
- difference of these two timings. This time divided by the total number
- of vectors created by the database (listed in the table as "vectors")
- is the vector rate for the database.
-
- 6) Two separate rates should be calculated from the average rates
- computed. One is the display rate, which is the average of the gears,
- mountain, and tetra databases; the other is the throughput rate, which
- is the average of the balls, rings, and tree databases. The difference
- is that the databases used to calculate the display rate are more
- 	   realistic in terms of the average polygon size presently used in hidden
- surface comparisons. This rate should better reflect the day-to-day
- performance expected of a machine. The databases for the throughput
- rate are characterized by very small polygons, and so will reflect the
- fastest speed of the hardware, as the effort to actually put the
- vectors on the screen should be minimal.
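-
- As a worked example of these two rates, at 512 x 512 the gears, mountain,
- and tetra figures of 8937, 5563, and 5496 vectors/second listed in the
- Timings section average to the display rate of 6665 vectors/second given
- there, while the balls, rings, and tree figures average to the 11554
- vectors/second throughput rate.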
-
-
- Hardware Shaded Hidden Surface Testing:
-
- 1) As in hardware vector testing, two tests should be performed at the
- two resolutions. A display and throughput rate should be calculated.
-
- 2) All polygons in the databases are guaranteed to be planar. Polygons
- do not interpenetrate, and only in "rings.c" does cyclical overlapping
- occur. The first three points of a polygon always form an angle less
- than 180 degrees, and so can be used to compute the surface normal.
- This computation should be done by the hardware if possible (and if
- not, should be noted as such).
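- 	   (A small "C" sketch of this normal computation appears after this list.)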
-
- 3) Polygons are one-sided for all databases (though transparent objects
- may have to be treated as two-sided), and so the hardware can use the
- surface normal to cull. It is not valid to cull (or perform any other
- clipping or shading function) using the software.
-
- 4) Polygons should be displayed using hidden surface rendering methods.
- No software sorting or other irregular manipulation of the data is
- allowed: the polygons should be rendered on a first-come first-served
- basis.
-
- 5) Up to seven light sources and an ambient contribution are used in the
- databases. If the hardware does not have this many, note it and test
- with the first few lights. Lights do not fall off with distance.
- Shadowing is not performed (though it's an incredible feature if it
- is).
-
- 6) Light sources are positional. If unavailable, assign the directional
- lights a vector given by the light position and the viewing "lookat"
- position.
-
- 7) Specular highlighting should be performed for surfaces with a specular
- component. The simple Phong distribution model is sufficient (though
- note improvements, if any).
-
- 8) Polygonal patches should have smooth shading, if available. In this
- case a Gouraud interpolation of the colors calculated at the vertices
- is the simplest algorithm allowed (again, note any more advanced
- algorithms).
-
- 9) Transparency should be used if available, and the technique explained.
- For example, the HP SRX uses "screen door" transparency, with a fill
- pattern of a checkerboard. Transparent objects are then rendered by
- filling every other pixel.
-
- 	10) Color calculations are not guaranteed to stay in range for these
- 	   databases. Overflow should be handled by scaling to the maximum color
- contribution (e.g. r/g/b of 2 4 5 gets scaled to 0.4 0.8 1.0) if
- available. If not, simple clamping should be used and noted (e.g. an
- r/g/b of 0.8 1.1 1.5 is output as 0.8 1.0 1.0).
-
- 11) As in vector testing, timings should be done only for the actual
- polygon rendering routines. This means such costs as changing fill
- 	   color, vertex format, or other state record information are not counted
- in the timing performed, but rather a flat-out speed test of the
- shaded polygon rate is made.
-
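- A minimal "C" sketch of the normal computation mentioned in test 2 (the type
- and function names here are illustrative, not those used in the package):
-
-         typedef struct { double x, y, z; } VERTEX;   /* illustrative type */
-
-         /* unnormalized surface normal from the first three vertices of a
-          * polygon, which always form an angle of less than 180 degrees */
-         void face_normal(VERTEX v0, VERTEX v1, VERTEX v2, VERTEX *n)
-         {
-             double ax = v1.x - v0.x, ay = v1.y - v0.y, az = v1.z - v0.z;
-             double bx = v2.x - v0.x, by = v2.y - v0.y, bz = v2.z - v0.z;
-
-             n->x = ay * bz - az * by;    /* (v1 - v0) x (v2 - v0) */
-             n->y = az * bx - ax * bz;
-             n->z = ax * by - ay * bx;
-         }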
-
- Other Hardware Tests:
-
- A number of tests which could also be performed, but that I have ignored
- for now, include the following:
-
- 1) Flat-shaded rendering: render polygons using only their fill colors.
- Such renderings will look like mush, but would be useful for showing
- the difference in speed for a flat-shade vs. shaded rendering.
-
- 2) Hidden-line rendering: render flat shaded polygons the same color as
- the background, with edges a different color. Culling can be used to
- eliminate polygons.
-
- 3) No polypatch rendering: render polygonal patches but ignore their
- normal per vertex values (i.e. render them as simple flat polygons).
- Again, useful to show the time cost of using normals/vertex.
- Pointless, of course, for the databases without polygonal patches.
-
- 4) No specular rendering: render without specular highlighting. Useless
- for the databases without specular highlighting.
-
- 5) Total cull and clip rendering: render with the lookat direction
- reversed. Change the at vector to be equal to ( 2*from - at ) and up
- to be -up. No objects will be displayed. This test gives a sense of
- the speed of the cull and clip hardware.
-
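- As a small "C" illustration of test 5 (the vector representation here is an
- assumption, not the package's own data structures):
-
-         /* reverse the view for the total cull and clip test: the new "at"
-          * point is 2*from - at, and the up vector is negated */
-         void reverse_view(double from[3], double at[3], double up[3])
-         {
-             int i;
-
-             for (i = 0; i < 3; i++) {
-                 at[i] = 2.0 * from[i] - at[i];
-                 up[i] = -up[i];
-             }
-         }
-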
- Such a full battery of tests would yield some interesting results.
- Hardware could be compared in a large number of categories, allowing users to
- be able to select equipment optimized for their application. I feel the most
- important rates are the vectors/second and the fully rendered polygons/second,
- as these two display methods bracket the gamut of realism offered by standard
- display hardware.
-
-
- Ray-Tracer Testing:
-
- 1) Assume the same conditions apply as in the shaded hidden surface
- testing outline, except as noted below. Note that the non-polygon
- (OUTPUT_CURVES) format should be used for ray-tracing tests.
-
- 2) All opaque (non-transmitting) primitives can be considered one-sided
- for rendering purposes, similar to how polygons were considered one-
- 	   sided for hidden surface testing. Only the outside of sphere, cone,
- and cylinder primitives are viewed in the databases.
-
- 3) Render at a resolution of 512 x 512 pixels, shooting rays at the
- corners (meaning that 513 x 513 eye rays will be created). The four
- corner contributions are averaged to arrive at a pixel value. If this
- is not done, note this fact. No pixel subdivision is performed.
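- 	   (A small "C" sketch of this corner averaging appears after this list.)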
-
- 4) The maximum tree depth is 5 (where the eye ray is of depth 1).
-
- 	5) All rays hitting specular or transmitting objects spawn
- reflection rays, unless the maximum ray depth was reached by the
- spawning ray. No adaptive tree depth cutoff is allowed; that is, all
- rays must be spawned (adaptive tree depth is a proven time saver and
- is also dependent on the color model used).
-
- 6) All rays hitting transmitting objects spawn refraction rays, unless
- the maximum ray depth was reached or total internal reflection occurs.
- 	   Transmitting rays should be refracted using Snell's law (i.e. should not
- pass straight through an object). If total internal reflection occurs,
- then only a reflection ray is generated at this node.
-
- 7) A shadow ray is not generated if the surface normal points away from
- the light.
-
- 8) Assume no hierarchy is given with the database (for example, color
- change cannot be used to signal a clustering). The ray tracing program
- itself can create its own hierarchy, of course.
-
- 9) Timing costs should be separated into at least two areas: preprocessing
- and ray-tracing. Preprocessing includes all time spent initializing,
- reading the database, and creating data structures needed to ray-trace.
- Preprocessing should be all the constant cost operations--those that
- do not change with the resolution of the image. Ray-tracing is the
- time actually spent tracing the rays (i.e. everything that is not
- preprocessing).
-
- 	10) Another timing cost of interest is a breakdown of the times spent
- 	   within preprocessing and within the actual ray-tracing. Examples
- 	   include time spent creating a hierarchy, octree, or item buffer, and
- 	   time spent intersecting the various primitives and calculating the
- 	   color.
-
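- A minimal "C" sketch of the corner sampling and averaging described in test 3
- (the array layout and names are illustrative, and a real ray tracer would do
- this per color channel):
-
-         #define XRES 512
-         #define YRES 512
-
-         /* one shaded value per eye ray: (XRES+1) x (YRES+1) = 513 x 513 */
-         double sample[YRES + 1][XRES + 1];
-         double pixel[YRES][XRES];
-
-         /* each pixel is the average of the four corner samples around it */
-         void average_corners(void)
-         {
-             int x, y;
-
-             for (y = 0; y < YRES; y++)
-                 for (x = 0; x < XRES; x++)
-                     pixel[y][x] = 0.25 * (sample[y][x] + sample[y][x + 1] +
-                                           sample[y + 1][x] + sample[y + 1][x + 1]);
-         }
-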
- One major complaint with simple timing tests is that they tell little about
- the actual algorithm performance per se. One partial remedy to this problem is
- to also include statistics on the number of object intersection tests performed
- and the number of hits recorded. This information is useful for comparing
- techniques. One example would be comparing various automatic hierarchy
- algorithms by using these statistics.
-
- Other information that is often overlooked or ignored should be included in
- ray-trace timing results. Some basic information about the system
- configuration should be given: average speed of the machine in terms of MIPS,
- Mflops, or Vaxen (i.e. the HP 320 is about 75 Kflops (single precision),
- which is about 30% of the speed of a VAX 11/780 with FPA running VMS 4.1,
- rated at 250 Kflops; however, an HP hardware engineer told me that the HP 320
- is twice the speed of an 11/780 - it is unclear which figure is correct, as
- he may have been referring to overall machine speed in MIPS); physical memory
- size (i.e. 6 Mbytes); special hardware such as a floating point accelerator or
- array processor, if different than the standard machine; system costs during
- tests (i.e. single user mode vs. time-share with other heavy users vs. serious
- operating system overhead); operating system (i.e. HP-UX version 5.2); and
- language used (i.e. optimized "C", single precision in general).
-
-
- Timings
- -------
-
- Rendering time for test set on HP-320SRX:
-
- Vector (seconds) | Z-buffer (seconds)
- 512 x 512 1024 x 1024 | 512 x 512 1024 x 1024
- balls 369.03 380.32 | 284.52 320.93
- gears 6.19 5.26 | 6.84 4.82
- mountain 4.83 4.37 | 3.84 2.88
- rings 232.28 243.65 | 181.28 202.11
- tetra 2.24 2.08 | 1.85 1.40
- tree 226.54 236.44 | 176.17 196.68
-
-
- Calculated performance of test set on HP-320SRX:
-
- Vector (vectors/second) | Z-buffer (polys/second)
- 512 x 512 1024 x 1024 | 512 x 512 1024 x 1024
- gears 8937 10513 | 1365 1940
- mountain 5563 6154 | 2331 3115
- tetra 5496 5908 | 2216 2925
- -------------------------------------------------------------------------------
- Display average 6665 7525 | 1971 2660
- (display average vector length for 512x512 is 10.4 pixels, 1024x1024 is 20.8)
- (display average polygon size for 512x512 is 87.4 pixels, 1024x1024 is 349.6)
-
-
- Vector (vectors/second) | Z-buffer (polys/second)
- 512 x 512 1024 x 1024 | 512 x 512 1024 x 1024
- balls 11521 11179 | 4981 4416
- rings 11572 11032 | 4819 4322
- tree 11569 11084 | 4835 4331
- -------------------------------------------------------------------------------
- Throughput ave. 11554 11098 | 4878 4356
- (throughput ave. vector length for 512x512 is 1.1 pixels, 1024x1024 is 2.3)
- (throughput ave. polygon size for 512x512 is 1.1 pixels, 1024x1024 is 4.4)
-
-
- The above times are computed from the average of 5 runs for each database.
- The timing increment is 0.02 seconds (50 Hz). Variance over the runs was
- quite small. Note that many of the tests at maximum square resolution
- (1024 x 1024) for the hidden surface timings are faster than those for the low
- resolution (512 x 512). The HP-320SRX is a workstation with 6 MByte memory and
- floating point accelerator. 24 bit color was used for display, and the full
- screen resolution is 1280 x 1024.
-
-
- Rendering time of the ray-traced test set on HP-320SRX (512 x 512 pixels):
-
- Setup Ray-Tracing | Polygon Sphere Cyl/Cone Bounding
- (hr:min:sec) | Tests Tests Tests Vol. Tests
- -------------------------------------------------------------------------------
- balls 13:24 4:36:48 | 1084K 6253K 0 41219K
- gears 13:49 10:32:51 | 12908K 0 0 92071K
- mountain 11:46 2:59:58 | 3807K 4193K 0 31720K
- rings 31:56 13:30:40 | 1203K 5393K 15817K 94388K
- tetra 4:04 15:40 | 342K 0 0 1491K
- tree 7:37 3:43:23 | 1199K 2218K 973K 20258K
-
- A typical set of ray tracing intersection statistics for the tetra database is:
-
- [these statistics should be the same for all users]
- image size: 512 x 512
- total number of pixels: 262144 [ 512 x 512 ]
- total number of trees generated: 263169 [ 513 x 513 ]
- total number of tree rays generated: 263169 [ no rays spawned ]
- number of eye rays which hit background: 213363 [ 81% ]
- average number of rays per tree: 1.000000
- average number of rays per pixel: 1.003910
- total number of shadow rays generated: 46150
-
- [these tests vary depending on the ray-tracing algorithm used]
- Ray/Item Tests Performed:
- Polygonal: 342149 ( 55495 hit - 16.2 % )
- Bounding Box: 1491181 ( 351754 hit - 23.6 % )
- Total: 1833330 ( 407249 hit - 22.2 % )
-
- Setup times are dominated by a verbose data format which causes a massive
- (usually tenfold) increase in size from the NFF file format to our in-house
- format. It should also be noted that for my sphere and bounding volume tests
- there are more tests than are strictly needed. This is because I require each
- light to be attached to an object (a sphere), which leads to extra testing in
- both of these categories.
-
- For what it's worth, my ray-tracer is based on hierarchical bounding boxes
- generated using Goldsmith & Salmon's automatic hierarchy method (see IEEE CG&A
- May 1987) and uses an item buffer (no light buffer yet) and shadow coherency.
- Something to think about is how the octree, SEADS, and other such algorithms
- perform when the background polygon dimensions are changed (thus changing the
- size of the outermost enclosing box, which changes the octree encoding of the
- environment). Same question with the effects on the algorithms of moving light
- source positions along the line defined by the light and "lookat" positions.
-
-
- Future Work
- -----------
-
- These databases are not meant to be the ultimate in standards, but are
- presented as a first attempt at providing somewhat representative modelled
- environments. A number of extensions to the file format should be provided
- someday, along with new database generators which use them. The present
- databases do not contain holed polygons, spline patches, polygonal mesh or
- triangular strip data structures, or other proposed PHIGS+ extensions.
- Modeling matrices are not output, and CSG combined primitives are not included.
-
- As far as database geometry is concerned, most scenes have a preponderance
- of small primitives, and in general very few objects ever get clipped. If you
- find that these databases do not reflect the type of environments you render,
- please write and explain why (or better yet, write one or more programs that
- will generate your "typical" environments--maybe it will get put in the next
- release).
-
-
- Acknowledgements
- ----------------
-
- I originally heard of this idea from Don Greenberg back in 1984. Some time
- earlier he and Ed Catmull had talked over coming up with some standard
- databases for testing algorithmic claims, and to them must go the credit for
- the basic concept. Many thanks to the reviewers, listed alphabetically: Kells
- Elmquist, Jeff Goldsmith, Donald Greenberg, Susan Spach, Rick Speer, K.R.
- Subramanian, John Wallace, and Louise Watson. Other people who have freely
- offered their ideas and opinions on this project include Brian Barsky, Andrew
- Glassner, Roy Hall, Chip Hatfield, Tim Kay, John Recker, Paul Strauss, and Chan
- Verbeck. These names are mentioned mostly as a listing of people interested in
- this idea. They do not necessarily agree (and in some cases strongly disagree)
- with the validity of the concept or the choice of databases.
-
- Your comments and suggestions on these databases are appreciated. Please
- send any timing results for software and hardware which you test.
-